Task: Integration Test Planning (CAD)
Planning is the initial activity of Integration Testing for Core Asset Development (CAD).
Disciplines: Integration Testing
Purpose
Evaluate the interactions between components and modules and test their relationships, in the context of the reference architecture.
Relationships
Roles
Primary Performer:
Additional Performers:
Inputs
Mandatory:
Optional:
Outputs
Main Description
Planning is responsible for estimating resources, coverage criteria, hardware, and schedule, and for stating which features will or will not be tested, which variation points and/or variants will be covered, and which paths will be considered in the test cycle. The risks inherent in these strategic decisions should also be considered.
Steps
Define and prepare the test environment

This step focuses on tailoring what was previously defined in the Master Test Plan regarding testing tools and other tools that support test activities, applying it to the context of integration testing.

If any required tool is not described in the Master Test Plan, the plan must be updated.

Define White-box and Black-box Techniques

Identify white-box and black-box techniques and/or strategies to be applied in integration testing.

A brief description of each approach:
In white-box testing, test cases are designed by examining the code to detect potential failure scenarios. Suitable input data is chosen by analyzing the source code of the application block for the (risky) code paths that need to be tested.
In black-box testing, no knowledge of the code is necessary; the intent is to simulate the end-user experience. Sample applications can be used to integrate and test the application block. In this approach, only inputs and outputs are considered as the basis for designing test cases.

Typical white-box testing techniques include [Burnstein, 2004]:
data flow testing - useful for revealing data flow defects;
branch testing - useful for detecting control defects; and
loop testing - helps to reveal loop-related defects.
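The branch and loop criteria above can be illustrated with a minimal sketch; the function `sum_positive` is a hypothetical example introduced here, not part of the method:

```python
# Hypothetical function under test: it contains one branch and one loop,
# so it illustrates both branch testing and loop testing.
def sum_positive(values):
    total = 0
    for v in values:        # loop: exercise zero, one, and many iterations
        if v > 0:           # branch: exercise both outcomes of the decision
            total += v
    return total

# Branch testing: cover both outcomes of the `v > 0` decision.
assert sum_positive([5]) == 5      # true branch taken
assert sum_positive([-3]) == 0     # false branch taken

# Loop testing: cover zero, exactly one, and many iterations.
assert sum_positive([]) == 0
assert sum_positive([7]) == 7
assert sum_positive([1, 2, 3]) == 6
```

The same cases drive data flow testing as well: each assertion forces a different definition-use path for `total`.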

Typical black-box testing techniques include [Burnstein, 2004]:
equivalence class partitioning - partitions the input domain of the unit under test into a finite number of equivalence classes, allowing the tester to select a member of each class as a representative of that class;
boundary value analysis - best applied together with equivalence class partitioning; it requires the tester to select elements close to the edges, so that both the upper and lower edges of each equivalence class are covered by test cases;
cause-and-effect graphing - a technique for combining conditions and deriving an effective set of test cases that may disclose inconsistencies in a specification;
state transition testing - based on the concepts of states and finite-state machines; it allows the tester to view the developing software in terms of its states, the transitions between states, and the inputs and events that trigger state changes. This view gives the tester an additional opportunity to design test cases for defects that may not be revealed by the input/output and cause-and-effect views provided by equivalence class partitioning and cause-and-effect graphing;
error guessing - based on the tester's/developer's past experience with code similar to the code under test, and their intuition as to where defects may lurk in the code.
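Equivalence class partitioning and boundary value analysis can be sketched together; the validator `is_eligible` and its 18..65 range are hypothetical, chosen only to make the classes concrete:

```python
# Hypothetical validator under test: accepts ages in the range 18..65.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence class partitioning: the input domain splits into three
# classes (below, within, above the range); one representative per class.
assert is_eligible(10) is False   # class 1: below range
assert is_eligible(40) is True    # class 2: within range
assert is_eligible(90) is False   # class 3: above range

# Boundary value analysis: elements at and just beyond each edge,
# covering both the upper and lower boundaries of the valid class.
assert is_eligible(17) is False
assert is_eligible(18) is True
assert is_eligible(65) is True
assert is_eligible(66) is False
```

Note how the boundary cases add four tests the partitioning alone would not require; off-by-one defects in the range check are caught only by these.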



Define strategy(ies) for testing the integration of components and modules

Identify a strategy suitable for testing the integration of components.

The most common approach in this context is the concept of object clusters. A cluster consists of related classes, for example, classes that work together (cooperate) to support a required functionality of the complete system [Burnstein, 2004].

To integrate methods and classes using the cluster approach, the tester can first integrate clusters of classes that work together to support simple functions. These are then combined to form higher-level, more complex clusters that perform multiple related functions, until the system as a whole is assembled [Burnstein, 2004].
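A low-level cluster test can be sketched as follows; the classes `Cart` and `PriceCalculator` are hypothetical examples of two cooperating classes, not part of the method:

```python
# Hypothetical cooperating classes forming a low-level cluster:
# Cart and PriceCalculator work together to support "compute total".
class PriceCalculator:
    def total(self, items):
        return sum(price * qty for price, qty in items)

class Cart:
    def __init__(self, calculator):
        self.calculator = calculator   # collaborator is injected
        self.items = []

    def add(self, price, qty):
        self.items.append((price, qty))

    def checkout(self):
        return self.calculator.total(self.items)

# Cluster integration test: exercise the two classes together,
# verifying their interaction rather than each unit in isolation.
cart = Cart(PriceCalculator())
cart.add(10.0, 2)
cart.add(5.0, 1)
assert cart.checkout() == 25.0
```

Once this simple cluster passes, it would be combined with other clusters (e.g. payment or inventory classes) into higher-level integration tests, following the bottom-up assembly described above.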

Define the coverage criteria

Test coverage in the test plan states what will be verified during integration testing. At this point, test methods are selected that state how the coverage will be achieved.

Identify the features and variation points to be tested and those not to be tested, as well as the critical paths and relationships. This identification is based on the requirements and use case analysis and prioritization performed during RiPLE-RE. The SPL risk document should be considered during this step.

Define the test input domain

Determine the test input domain in order to address the coverage criteria defined previously.

Define the Pass/Fail criteria

Specify the stop criteria to be used during the integration test cycle, e.g. 98% of test cases passed.
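A stop criterion of this kind reduces to a simple ratio check, sketched below; the function name and the 98% default follow the example threshold above and are otherwise arbitrary:

```python
# Sketch of a pass/fail stop criterion check for a test cycle,
# assuming the 98% example threshold from the step above.
def meets_pass_criterion(passed, executed, threshold=0.98):
    if executed == 0:
        return False               # no executed tests: criterion not met
    return passed / executed >= threshold

assert meets_pass_criterion(98, 100) is True    # exactly at threshold
assert meets_pass_criterion(97, 100) is False   # below threshold
```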
Summarize the information in a test plan

All the information from the previous steps is then compiled into a test plan. A planning template should be used.

Review and approve the test plan

The test plan developer or manager schedules a review meeting with the major players, reviews the plan in detail to ensure it is complete and workable, and obtains approval to proceed [Lewis, 2008]. The review should uncover incorrect, incomplete, missing, or inappropriate information.

Key Considerations
This activity will guide the Integration Test level in Core Asset Development.